Review: M. J. Sharpe, General Theory of Markov Processes
Authors
Abstract
Similar resources
Markov Regenerative Process in Sharpe
Markov regenerative processes (MRGPs) constitute a more general class of stochastic processes than traditional Markov processes. Markovian dependency, a first-order dependency, is the simplest and most important form of dependency in stochastic processes: the past history of a Markov chain is summarized in its current state, and the subsequent behavior of the system depends only on that state. Sojo...
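A minimal sketch of the Markov property described in that abstract, assuming nothing from the review itself: a discrete-time chain is simulated so that each step consults only the current state. The three-state transition matrix below is an illustrative assumption.

```python
import numpy as np

# Hypothetical 3-state chain; the transition matrix is an illustrative assumption.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

def simulate(P, x0, n_steps, rng=None):
    """Simulate a Markov chain: the next state is drawn using only the current one."""
    rng = np.random.default_rng() if rng is None else rng
    path = [x0]
    for _ in range(n_steps):
        # All past history is summarized in path[-1]; nothing else is consulted.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate(P, x0=0, n_steps=10, rng=np.random.default_rng(0)))
```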
On General Perturbations of Symmetric Markov Processes
Let X be a symmetric right process, and let Z = {Z_t, t ≥ 0} be a multiplicative functional of X that is the product of a Girsanov transform, a Girsanov transform under time-reversal and a continuous Feynman-Kac transform. In this paper we derive necessary and sufficient conditions for the strong L²-continuity of the semigroup {T_t, t ≥ 0} given by T_t f(x) = E_x[Z_t f(X_t)], expressed in terms of the ...
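For intuition about semigroups of the form T_t f(x) = E_x[Z_t f(X_t)], a hedged Monte Carlo sketch: here X is simplified to a standard Brownian motion and Z_t to a plain Feynman-Kac weight exp(-∫₀ᵗ q(X_s) ds), dropping the Girsanov factors of the abstract; the choices of q, f and the Euler discretization are assumptions for illustration only.

```python
import numpy as np

def T_t_f(x, t, f, q, n_paths=20_000, n_steps=200, rng=None):
    """Monte Carlo estimate of T_t f(x) = E_x[Z_t f(X_t)] with
    Z_t = exp(-integral_0^t q(X_s) ds) and X a standard Brownian motion."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t / n_steps
    X = np.full(n_paths, float(x))
    log_Z = np.zeros(n_paths)
    for _ in range(n_steps):
        log_Z -= q(X) * dt                                # accumulate the Feynman-Kac weight
        X += np.sqrt(dt) * rng.standard_normal(n_paths)   # Euler step of Brownian motion
    return np.mean(np.exp(log_Z) * f(X))

# Illustrative q and f (assumptions, not from the paper).
print(T_t_f(x=0.0, t=1.0, f=lambda y: y**2, q=lambda y: 0.5 * y**2))
```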
General Theory of Processes Notes
Sets subscripted with a "+" are nonnegative, and sets subscripted with a "++" are strictly positive. We write ⌈x⌉_n := Σ_{i∈ℕ₊} ((i+1)/2^n)·1_{[i/2^n, (i+1)/2^n)}(x) and ⌊x⌋_n := Σ_{i∈ℕ₊} (i/2^n)·1_{(i/2^n, (i+1)/2^n]}(x), so ⌈x⌉_n ↓ x and ⌊x⌋_n ↑ x, unless x = 0, in which case ⌊x⌋_n = 0 for all n. We will write (F_t)_{0≤t≤∞} to denote a filtration. If X• is a process taking values in (E, ℰ), then F_t = σ({X_s ∈ A | 0 ≤ s ≤ t, A ...
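A small illustration of the dyadic approximations defined above (my own sketch, not from the notes): for each n the functions return the endpoints of the dyadic interval of mesh 2^(-n) containing x, so that ⌊x⌋_n increases to x and ⌈x⌉_n decreases to x.

```python
import math

def dyadic_ceil(x, n):
    """⌈x⌉_n: right endpoint of the dyadic interval [i/2^n, (i+1)/2^n) containing x."""
    return (math.floor(x * 2**n) + 1) / 2**n

def dyadic_floor(x, n):
    """⌊x⌋_n: left endpoint of the dyadic interval (i/2^n, (i+1)/2^n] containing x (0 if x = 0)."""
    if x == 0:
        return 0.0
    return (math.ceil(x * 2**n) - 1) / 2**n

x = 0.73
for n in range(1, 6):
    # ⌊x⌋_n increases to x while ⌈x⌉_n decreases to x as n grows.
    print(n, dyadic_floor(x, n), dyadic_ceil(x, n))
```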
Markov Decision Processes with General Discount Functions
In Markov Decision Processes, the discount function determines how much the reward at each point in time adds to the value of the process, and thus deeply affects the optimal policy. Two cases of discount functions are well known and analyzed. The first is no discounting at all, which corresponds to the total- and average-reward criteria. The second case is a constant discount rate, which leads to a...
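For comparison with the constant-rate case mentioned in that abstract, a small value-iteration sketch under a constant discount rate γ; the two-state MDP, its rewards and γ = 0.95 are assumptions made up for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (transition probabilities and rewards are assumptions).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],    # P[s, a, s'] for state 0
              [[0.5, 0.5], [0.1, 0.9]]])   # ... and for state 1
R = np.array([[1.0, 0.0],                  # R[s, a]
              [0.0, 2.0]])
gamma = 0.95                               # constant discount rate

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: a reward received t steps ahead is worth gamma**t of its face value.
    Q = R + gamma * (P @ V)
    V = Q.max(axis=1)

print(V, Q.argmax(axis=1))                 # value function and greedy policy
```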
Transition Path Theory for Markov Jump Processes
The framework of transition path theory (TPT) is developed in the context of continuous-time Markov chains on discrete state-spaces. Under the assumption of ergodicity, TPT singles out any two subsets of the state-space and analyzes the statistical properties of the associated reactive trajectories, i.e. those trajectories by which the random walker transits from one subset to the other. TPT gives pr...
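One central TPT quantity is the forward committor q⁺, the probability of reaching the target set B before the source set A. The sketch below computes it for a small discrete-time chain by solving the associated linear system; the four-state transition matrix and the sets A, B are assumptions, and a continuous-time jump process as in the abstract would use its generator in place of I − P.

```python
import numpy as np

# Illustrative 4-state transition matrix (an assumption for this sketch).
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.3, 0.5, 0.2],
              [0.0, 0.0, 0.3, 0.7]])
A, B = [0], [3]                             # source and target subsets of the state space

# Forward committor: q = 0 on A, q = 1 on B, and (I - P) q = 0 on the remaining states.
n = len(P)
rest = [s for s in range(n) if s not in A + B]
q = np.zeros(n)
q[B] = 1.0
M = np.eye(len(rest)) - P[np.ix_(rest, rest)]
b = P[np.ix_(rest, B)].sum(axis=1)          # probability of jumping directly into B
q[rest] = np.linalg.solve(M, b)
print(q)                                    # committor values, increasing from A toward B
```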
Journal
Journal title: The Annals of Probability
Year: 1990
ISSN: 0091-1798
DOI: 10.1214/aop/1176990652